23 research outputs found

    One-Bit ExpanderSketch for One-Bit Compressed Sensing

    Is it possible to obliviously construct a set of hyperplanes H such that one can approximate a unit vector x when given only the side on which x lies with respect to every h in H? In the sparse recovery literature, where x is approximately k-sparse, this problem is called one-bit compressed sensing and has received a fair amount of attention over the last decade. In this paper we obtain the first scheme that achieves almost optimal measurements and sublinear decoding time for one-bit compressed sensing in the non-uniform case. For a large range of parameters, we improve the state of the art in both the number of measurements and the decoding time.
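    As a concrete illustration of the measurement model only (a minimal sketch, not the scheme from the paper; the function name and Gaussian hyperplane choice are our assumptions), the following Python snippet takes one-bit sign measurements of a vector against random hyperplanes. Note that the signs are invariant under positive scaling of x, which is why at best the direction of a unit vector can be recovered:

    ```python
    import random

    def one_bit_measure(x, num_measurements, seed=0):
        """Take one-bit measurements of x: for each random Gaussian
        hyperplane h, record only the side of h on which x lies,
        i.e. the sign of the inner product <h, x>."""
        rng = random.Random(seed)
        signs = []
        for _ in range(num_measurements):
            h = [rng.gauss(0.0, 1.0) for _ in range(len(x))]
            dot = sum(hj * xj for hj, xj in zip(h, x))
            signs.append(1 if dot >= 0 else -1)
        return signs
    ```

    Since `one_bit_measure(x, m)` and `one_bit_measure(2x, m)` produce identical sign patterns, the magnitude of x is information-theoretically lost, and recovery guarantees are stated for unit vectors.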

    Sublinear-Time Algorithms for Compressive Phase Retrieval

    In the compressive phase retrieval problem, also known as phaseless compressed sensing or compressed sensing from intensity-only measurements, the goal is to reconstruct a sparse or approximately $k$-sparse vector $x \in \mathbb{R}^n$ given access to $y = |\Phi x|$, where $|v|$ denotes the vector obtained by taking the absolute value of $v \in \mathbb{R}^n$ coordinate-wise. In this paper we present sublinear-time algorithms for different variants of the compressive phase retrieval problem which are akin to the variants considered for the classical compressive sensing problem in theoretical computer science. Our algorithms use purely combinatorial techniques and a near-optimal number of measurements. Comment: The $\ell_2/\ell_2$ algorithm was substituted by a modification of the $\ell_\infty/\ell_2$ algorithm, which strictly subsumes it.
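    A tiny sketch of the measurement model (illustrative only; the function name and the small example matrix are our assumptions, not from the paper). Taking absolute values coordinate-wise discards the sign of each inner product, so x and -x are indistinguishable, which is the defining ambiguity of phase retrieval:

    ```python
    def phaseless_measure(x, Phi):
        """Intensity-only measurements y = |Phi x|: the absolute value
        of each inner product <row, x>, taken coordinate-wise."""
        return [abs(sum(p * xi for p, xi in zip(row, x))) for row in Phi]
    ```

    For any measurement matrix, `phaseless_measure(x, Phi) == phaseless_measure(-x, Phi)`, so recovery is only possible up to a global sign.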

    On Fast Decoding of High-Dimensional Signals from One-Bit Measurements

    In the problem of one-bit compressed sensing, the goal is to find a delta-close estimate of a k-sparse vector x in R^n given the signs of the entries of y = Phi x, where Phi is called the measurement matrix. For the one-bit compressed sensing problem, previous work [Plan, 2013], [Gopi, 2013] achieved Theta(delta^{-2} k log(n/k)) and O~((1/delta) k log(n/k)) measurements, respectively, but the decoding time was Omega(n k log(n/k)). In this paper, using tools and techniques developed in the context of two-stage group testing and streaming algorithms, we contribute towards the direction of sublinear decoding time. We give a variety of schemes for the different versions of one-bit compressed sensing, such as the for-each and for-all versions, and for support recovery; all of these have at most a log k overhead in the number of measurements and poly(k, log n) decoding time, which is an exponential improvement over previous work in terms of the dependence on n.
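    To convey the group-testing flavor behind sublinear decoding, here is a toy sketch (our own illustration, not the paper's scheme) for the simplest case: locating the support of a 1-sparse nonnegative vector with only log n bit-test measurements, where the b-th measurement asks whether the support index has bit b set:

    ```python
    def locate_one_sparse(x, n):
        """Locate the support of a 1-sparse nonnegative vector x via
        bit tests: measurement b is the sign of <h_b, x>, where h_b is
        the indicator of indices whose b-th bit is 1. Reading the
        positive measurements off as bits reconstructs the index
        directly, with no linear scan over all n coordinates."""
        j, b = 0, 0
        while (1 << b) < n:
            h = [1 if (i >> b) & 1 else 0 for i in range(n)]
            if sum(hi * xi for hi, xi in zip(h, x)) > 0:
                j |= 1 << b
            b += 1
        return j
    ```

    This decodes in O(log n) measurement reads rather than Omega(n) work; the schemes in the paper extend this idea, via two-stage group testing, to k-sparse vectors with noise.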

    Nearly Optimal Sparse Polynomial Multiplication

    In the sparse polynomial multiplication problem, one is asked to multiply two sparse polynomials f and g in time that is proportional to the size of the input plus the size of the output. The polynomials are given via lists of their coefficients F and G, respectively. Cole and Hariharan (STOC '02) gave a nearly optimal algorithm for the case where the coefficients are positive, and Arnold and Roche (ISSAC '15) devised an algorithm running in time proportional to the "structural sparsity" of the product, i.e. the size of the set supp(F)+supp(G). The latter algorithm is particularly efficient when there are not "too many cancellations" of coefficients in the product. In this work we give a clean, nearly optimal algorithm for the sparse polynomial multiplication problem. Comment: Accepted to IEEE Transactions on Information Theory.
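    A naive O(|F| * |G|) baseline makes the distinction between output sparsity and structural sparsity concrete (a minimal sketch under our own dict-of-exponents representation, not the paper's algorithm):

    ```python
    def sparse_multiply(F, G):
        """Multiply sparse polynomials given as {exponent: coefficient}
        dicts. Terms whose coefficients cancel to zero are dropped, so
        the output support can be strictly smaller than the structural
        support supp(F) + supp(G)."""
        H = {}
        for ef, cf in F.items():
            for eg, cg in G.items():
                e = ef + eg
                H[e] = H.get(e, 0) + cf * cg
                if H[e] == 0:
                    del H[e]
        return H
    ```

    For example, (1 + x)(1 - x) = 1 - x^2: the structural support {0, 1, 2} has size 3, but the x term cancels, so the true output has only 2 terms. Output-sensitive algorithms aim for time proportional to the smaller quantity.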

    Fast n-Fold Boolean Convolution via Additive Combinatorics

    We consider the problem of computing the Boolean convolution (with wraparound) of $n$ vectors of dimension $m$, or, equivalently, the problem of computing the sumset $A_1+A_2+\ldots+A_n$ for $A_1,\ldots,A_n \subseteq \mathbb{Z}_m$. Boolean convolution formalizes the frequent task of combining two subproblems, where the whole problem has a solution of size $k$ if for some $i$ the first subproblem has a solution of size $i$ and the second subproblem has a solution of size $k-i$. Our problem formalizes a natural generalization, namely combining solutions of $n$ subproblems subject to a modular constraint. This simultaneously generalises Modular Subset Sum and Boolean Convolution (Sumset Computation). Although nearly optimal algorithms are known for special cases of this problem, not even tiny improvements are known for the general case. We almost resolve the computational complexity of this problem, shaving essentially a factor of $n$ from the running time of previous algorithms. Specifically, we present a \emph{deterministic} algorithm running in \emph{almost} linear time with respect to the input plus output size $k$. We also present a \emph{Las Vegas} algorithm running in \emph{nearly} linear expected time with respect to the input plus output size $k$. Previously, no deterministic or randomized $o(nk)$ algorithm was known. At the heart of our approach lies a careful usage of Kneser's theorem from Additive Combinatorics, and a new deterministic almost linear output-sensitive algorithm for non-negative sparse convolution. In total, our work builds a solid toolbox that could be of independent interest.
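    The object being computed can be sketched with a naive fold (our own illustration of the problem, not the paper's almost-linear algorithm; each pairwise fold here costs up to |S| * |A_i| set operations, which is exactly the O(nk)-type cost the paper removes):

    ```python
    def iterated_sumset(sets, m):
        """Compute the iterated sumset A_1 + ... + A_n modulo m by
        folding pairwise sumsets, starting from the identity {0}."""
        S = {0}
        for A in sets:
            S = {(s + a) % m for s in S for a in A}
        return S
    ```

    Modular Subset Sum is the special case where each A_i = {0, a_i}: the result is then the set of all subset sums of the a_i modulo m.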

    Deterministic Sparse Fourier Transform with an ℓ_∞ Guarantee

    In this paper we revisit the deterministic version of the Sparse Fourier Transform problem, which asks to read only a few entries of $x \in \mathbb{C}^n$ and design a recovery algorithm such that the output of the algorithm approximates $\hat x$, the Discrete Fourier Transform (DFT) of $x$. The randomized case has been well understood, while the main work in the deterministic case is that of Merhi et al. (J Fourier Anal Appl 2018), which obtains $O(k^2 \log^{-1} k \cdot \log^{5.5} n)$ samples and a similar runtime with the $\ell_2/\ell_1$ guarantee. We focus on the stronger $\ell_\infty/\ell_1$ guarantee and the closely related problem of incoherent matrices. We list our contributions as follows. 1. We find a deterministic collection of $O(k^2 \log n)$ samples for the $\ell_\infty/\ell_1$ recovery in time $O(nk \log^2 n)$, and a deterministic collection of $O(k^2 \log^2 n)$ samples for the $\ell_\infty/\ell_1$ sparse recovery in time $O(k^2 \log^3 n)$. 2. We give new deterministic constructions of incoherent matrices that are row-sampled submatrices of the DFT matrix, via a derandomization of Bernstein's inequality and bounds on exponential sums considered in analytic number theory. Our first construction matches a previous randomized construction of Nelson, Nguyen and Woodruff (RANDOM '12), where there was no constraint on the form of the incoherent matrix. Our algorithms are nearly sample-optimal, since a lower bound of $\Omega(k^2 + k \log n)$ is known, even for the case where the sensing matrix can be arbitrarily designed. A similar lower bound of $\Omega(k^2 \log n / \log k)$ is known for incoherent matrices. Comment: ICALP 2020; presentation improved according to reviewers' comments.
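    To make the coherence of a row-sampled DFT submatrix concrete, here is a brute-force sketch (illustrative only; the helper name `coherence` and the normalization are our assumptions). The coherence is the largest normalized inner product between two distinct columns of the submatrix formed by the sampled rows:

    ```python
    import cmath

    def coherence(rows, n):
        """Coherence of the submatrix of the n x n DFT matrix restricted
        to the given row indices: the maximum over column pairs (a, b),
        a != b, of |sum_r exp(2*pi*i*r*(a-b)/n)| / len(rows)."""
        best = 0.0
        for a in range(n):
            for b in range(a + 1, n):
                s = sum(cmath.exp(2j * cmath.pi * r * (a - b) / n)
                        for r in rows)
                best = max(best, abs(s) / len(rows))
        return best
    ```

    Sampling all n rows gives coherence 0 (the DFT columns are orthogonal); sampling a single row gives the worst possible coherence 1. The constructions in the paper choose few rows deterministically while keeping this quantity small.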

    Deterministic Heavy Hitters with Sublinear Query Time

    We study the classic problem of finding l_1 heavy hitters in the streaming model. In the general turnstile model, we give the first deterministic sublinear-time sketching algorithm which takes a linear sketch of length O(epsilon^{-2} log n * log^*(epsilon^{-1})), which is only a factor of log^*(epsilon^{-1}) more than the best existing polynomial-time sketching algorithm (Nelson et al., RANDOM '12). Our approach is based on an iterative procedure, where most unrecovered heavy hitters are identified in each iteration. Although this technique has been extensively employed in the related problem of sparse recovery, this is the first time, to the best of our knowledge, that it has been used in the context of heavy hitters. Along the way we also obtain a sublinear-time algorithm for the closely related problem of l_1/l_1 compressed sensing, matching the space usage of previous (super-)linear-time algorithms. In the strict turnstile model, we show that the runtime can be improved and the sketching matrix can be made strongly explicit with O(epsilon^{-2} log^3 n / log^3(1/epsilon)) rows.
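    For reference, the exact (non-sketching) version of the task is easy to state in code: a minimal baseline under our own conventions, which materializes the full frequency vector instead of a small linear sketch, showing what the turnstile updates and the l_1 heavy hitter threshold mean:

    ```python
    def l1_heavy_hitters(updates, n, eps):
        """Exact baseline for l_1 heavy hitters in the turnstile model:
        apply updates (i, delta), which may be negative, to a length-n
        frequency vector f, then return every index i with
        |f_i| >= eps * ||f||_1. Sketching algorithms answer the same
        query from a summary much smaller than f itself."""
        f = [0] * n
        for i, d in updates:
            f[i] += d
        total = sum(abs(v) for v in f)
        return [i for i, v in enumerate(f)
                if total > 0 and abs(v) >= eps * total]
    ```

    This uses Theta(n) space; the point of the paper's sketch is to answer the same query from O~(epsilon^{-2} log n) linear measurements, with sublinear query time.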